
    Low-effort place recognition with WiFi fingerprints using deep learning

    Using WiFi signals for indoor localization is the main localization modality of existing personal indoor localization systems operating on mobile devices. WiFi fingerprinting is also used for mobile robots, as WiFi signals are usually available indoors and can provide a rough initial position estimate or be used together with other positioning systems. Currently, the best solutions rely on filtering, manual data analysis, and time-consuming parameter tuning to achieve reliable and accurate localization. In this work, we propose to use deep neural networks to significantly lower the workforce burden of localization system design, while still achieving satisfactory results. Assuming the state-of-the-art hierarchical approach, we employ the DNN system for building/floor classification. We show that stacked autoencoders allow the feature space to be reduced efficiently in order to achieve robust and precise classification. The proposed architecture is verified on the publicly available UJIIndoorLoc dataset and the results are compared with other solutions.
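The pipeline described above (compress RSSI fingerprints with an autoencoder, then classify in the reduced feature space) can be illustrated with a toy sketch. This is not the paper's architecture: it uses a single tied-weight autoencoder layer instead of stacked autoencoders, synthetic data standing in for UJIIndoorLoc, and a nearest-centroid classifier instead of a DNN classifier; all dimensions and names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RSSI fingerprints: 200 samples, 52 access points, two "floors"
# (hypothetical data standing in for UJIIndoorLoc-style fingerprints).
n, d, k = 200, 52, 8
floor = rng.integers(0, 2, n)
X = rng.normal(0, 0.1, (n, d)) + floor[:, None] * np.linspace(0, 1, d)

# One tied-weight autoencoder layer: code = tanh(X W), recon = code W^T.
W = rng.normal(0, 0.1, (d, k))
lr = 0.05
losses = []
for _ in range(300):
    code = np.tanh(X @ W)
    recon = code @ W.T
    err = recon - X
    losses.append(float(np.mean(err ** 2)))
    # Gradient of the reconstruction error w.r.t. W; the encoder and
    # decoder share the same weight matrix, so both paths contribute.
    d_code = err @ W * (1 - code ** 2)
    grad = (X.T @ d_code + err.T @ code) / n
    W -= lr * grad

# Nearest-centroid floor classifier in the learned 8-D code space.
code = np.tanh(X @ W)
centroids = np.stack([code[floor == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((code[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = float((pred == floor).mean())
```

The point of the sketch is the two-stage structure: the autoencoder is trained only to reconstruct its input, and the classifier then operates on the compressed codes rather than the raw fingerprint vectors.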

    Advances in autonomy of mobile robots: perception and navigation

    This article outlines current research efforts in robotics aimed at extending the decisional autonomy of mobile robots. On the basis of a literature study and the experience of the research team of the Institute of Control and Information Engineering (IAiII) at Poznań University of Technology, it is argued that mobile robots require a substantial degree of autonomy, and that overcoming the existing limitations of their perception and navigation capabilities is a precondition for their wider acceptance in practical applications. A paradigm shift is required in perception and navigation methods to take mobile robots from well-structured laboratories to real environments, which can be unstructured, cluttered, and populated by human beings and other robots. Current and emerging application areas of mobile robots are discussed briefly, and the resulting requirements for perception and navigation capabilities are formulated. The discussion focuses on 3D and multi-sensor perception, semantically enriched environment models, and navigation algorithms for challenging scenarios. The discussion is illustrated with results of research conducted at the Institute of Control and Information Engineering, Poznań University of Technology.

    Uncertainty Models of Vision Sensors in Mobile Robot Positioning

    This paper discusses how uncertainty models of vision-based positioning sensors can be used to support the planning and optimization of positioning actions for mobile robots. Two sensor types are considered: global vision with overhead cameras, and an on-board camera observing artificial landmarks. The developed sensor models are applied to optimize robot positioning actions in a distributed system of mobile robots and monitoring sensors, and to plan the sequence of actions for a robot cooperating with the external infrastructure supporting its navigation.
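The idea of choosing a positioning action based on a sensor uncertainty model can be sketched in a few lines. The variance models below are hypothetical stand-ins (the paper's actual models are not reproduced here): the overhead-camera error is assumed to grow mildly with distance to the robot, while the landmark-observation error grows quickly as the landmark recedes.

```python
# Hypothetical uncertainty models: position-error variance (m^2) as a
# function of distance, one per positioning sensor type.
def overhead_camera_var(dist):
    return 0.01 + 0.002 * dist       # global vision: mild growth with distance

def onboard_landmark_var(dist):
    return 0.005 * dist ** 2         # landmark camera: quadratic growth

def best_action(dist_to_camera, dist_to_landmark):
    """Pick the positioning action whose model predicts the lowest variance."""
    options = {
        "global_vision": overhead_camera_var(dist_to_camera),
        "landmark_observation": onboard_landmark_var(dist_to_landmark),
    }
    return min(options, key=options.get)
```

With these assumed models, a nearby landmark wins over the overhead camera, while a distant landmark makes the global vision system the better choice.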

    On qualitative uncertainty in range measurements from 2D laser scanners

    This paper describes research concerning the recognition, classification, and correction of qualitative errors in range measurements from 2D laser scanners, which are nowadays commonly used on mobile robots for navigation. The main cause of qualitative uncertainty is mixed measurements. This effect has been investigated experimentally for two different classes of laser range sensors, and explained by analysing the physical phenomena underlying their operation. A local grid map has been proposed as an intermediate data structure, which makes it possible to remove erroneous range measurements, whether mixed measurements or measurements caused by dynamic objects in the vicinity of the robot. A novel fuzzy-set-based algorithm has been employed to update evidence in the grid. Test results show that this algorithm is superior to the common Bayesian approach when qualitative errors in range measurements are present.
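The contrast between a Bayesian grid update and a bounded fuzzy-style update can be shown on a single occupancy cell. The fuzzy operator below is a simple clamped bounded sum chosen for illustration, not the paper's actual fuzzy-set algorithm; the parameter values are likewise arbitrary.

```python
import math

# Standard log-odds Bayesian update for one occupancy cell.
def bayes_update(logodds, hit, l_occ=0.85, l_free=-0.4):
    return logodds + (l_occ if hit else l_free)

# A simple fuzzy-style update: evidence clamped to [0, 1], so a single
# spurious reading cannot push the cell arbitrarily far from "free".
def fuzzy_update(mu, hit, step=0.2):
    return min(1.0, mu + step) if hit else max(0.0, mu - step)

# One spurious "mixed measurement" hit followed by consistent free readings.
lo, mu = 0.0, 0.5
lo = bayes_update(lo, True)
mu = fuzzy_update(mu, True)
for _ in range(5):
    lo = bayes_update(lo, False)
    mu = fuzzy_update(mu, False)
p_bayes = 1 / (1 + math.exp(-lo))   # convert log-odds back to probability
```

After the run, the clamped fuzzy membership settles exactly at 0 (free), while the unbounded log-odds estimate only drifts back towards free; this bounded behaviour is the intuition behind preferring a fuzzy update when mixed measurements occur.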

    Opis otoczenia na podstawie danych z sensorów laserowych i wizyjnych (Describing the environment with data from laser and vision sensors)

    In this paper, we discuss methods to increase the discriminative properties of laser-based geometric landmarks used in simultaneous localisation and mapping by employing monocular vision data. It is assumed that the robot operates in a man-made environment dominated by vertical planes (walls). Vertical edges extracted from images help to estimate the length of line segments that are only partially observed. Salient visual features, which defy simple geometric interpretation, are handled by the Scale Invariant Feature Transform (SIFT) method. These different types of photometric features are aggregated, together with the basic 2D line segments extracted from the laser scanner data, into Perceptually Rich Segments (PRS). Experimental results on the extraction and matching of SIFT descriptors are presented, along with preliminary results on building an environment model from PRS objects.

    A software architecture and teleoperation system for a semi-autonomous walking robot

    In this paper, we present the software architecture of the high-level control and navigation system for the semi-autonomous hexapod robot Messor. The robot features a rich set of sensors and an increased scope of decisional autonomy, which allows it to traverse rugged terrain on its own. The architecture integrates a number of sensory-data-processing and planning modules into a system that allows the robot to traverse unknown rugged terrain autonomously. It is based on the multi-agent paradigm, which provides flexibility and makes modularisation of the software much easier. Selected components of both the navigation and teleoperation systems are presented in more detail to illustrate our research and design methodology.

    A biologically inspired approach to feasible gait learning for a hexapod robot

    The objective of this paper is to develop feasible gait patterns that could be used to control a real hexapod walking robot. These gaits should enable the fastest movement possible with the given robot's mechanics and drives on flat terrain. Biological inspirations are commonly used in the design of walking robots and their control algorithms. However, legged robots differ significantly from their biological counterparts. Hence, we believe that gait patterns should be learned using the robot or its simulation model rather than copied from insect behaviour. As we have found tabula rasa learning ineffective in this case due to the large and complicated search space, we adopt a different strategy: in a series of simulations we show how a progressive reduction of the permissible search space for the leg movements leads to the evolution of effective gait patterns. This strategy enables the evolutionary algorithm to discover proper leg co-ordination rules for a hexapod robot, using only simple dependencies between the states of the legs and a simple fitness function. The dependencies used are inspired by typical insect behaviour, although we show that all the introduced rules also emerge naturally in the evolved gait patterns. Finally, the gaits evolved in simulations are shown to be effective in experiments on a real walking robot.
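The progressive search-space reduction strategy can be sketched with a toy (1+1) evolutionary run. The fitness function here simply scores alternating-tripod phase relations between neighbouring legs and stands in for the paper's simulation-based fitness; the coupling rule in stage 2 (contralateral legs half a cycle apart) is one example of the kind of dependency the paper introduces, and all numeric choices are illustrative.

```python
import random

random.seed(3)
LEGS = 6

def pair_error(x, y):
    """How far two leg phases are from being half a cycle apart (circular)."""
    d = abs((x - y) % 1.0)
    return abs(min(d, 1.0 - d) - 0.5)

def fitness(phases):
    # Reward the alternating-tripod structure: neighbouring legs should be
    # half a walking cycle out of phase. 0 is the best possible score.
    return -sum(pair_error(phases[i], phases[(i + 1) % LEGS])
                for i in range(LEGS))

def hillclimb(genome, decode, generations=1500):
    """(1+1) evolution: mutate one gene, keep the child if not worse."""
    best, best_f = genome, fitness(decode(genome))
    for _ in range(generations):
        child = best[:]
        i = random.randrange(len(child))
        child[i] = (child[i] + random.gauss(0.0, 0.1)) % 1.0
        f = fitness(decode(child))
        if f >= best_f:
            best, best_f = child, f
    return best, best_f

# Stage 1: unrestricted search space - all six leg phases evolve freely.
g1, f1 = hillclimb([random.random() for _ in range(LEGS)],
                   decode=lambda g: list(g))

# Stage 2: reduced search space - only three phases evolve, and each
# contralateral leg is coupled half a cycle behind its partner.
def expand(tripod):
    return list(tripod) + [(p + 0.5) % 1.0 for p in tripod]

g2, f2 = hillclimb(g1[:3], decode=expand)
```

The reduced genotype in stage 2 still contains the optimal tripod gait, but the search space is half the size, which is the essence of the strategy described in the abstract.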

    An educational quadruped robot with hybrid leg-wheel locomotion

    This paper covers the design and implementation of the mechanics, control system, and software of a quadruped walking robot. The robot has a hybrid leg-wheel locomotion mechanism, which makes it an interesting subject for studies in robot control and motion planning. The design of the robot's hardware is shown in detail, followed by a presentation of the implemented motion strategies, which involve both the legged (discrete) and wheeled (continuous) modes of locomotion. The robot's motion capabilities are also illustrated by short movie clips made available on the Internet.

    Improving Self-Localization Efficiency in a Small Mobile Robot by Using a Hybrid Field of View Vision System

    In this article, a self-localization system for small mobile robots based on inexpensive cameras and unobtrusive, passive landmarks is presented and evaluated. The main contribution is the experimental evaluation of the hybrid field of view vision system for self-localization with artificial landmarks. The hybrid vision system consists of an omnidirectional, upward-looking camera with a mirror and a typical front-view camera. This configuration is inspired by the co-operation of peripheral and foveal vision in animals. We demonstrate that the omnidirectional camera enables the robot to quickly detect landmark candidates and to track already known landmarks in the environment. The front-view camera, guided by the omnidirectional information, enables precise measurements of the landmark position over extended distances. The passive landmarks are based on QR codes, which makes it possible to easily include additional navigation-relevant information in the landmark pattern. We present an evaluation of the positioning accuracy of the system mounted on a SanBot Mk II mobile robot. The experimental results demonstrate that the hybrid field of view vision system and the QR code landmarks enable the small mobile robot to navigate safely along extended paths in a typical home environment.
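Once landmarks at known map positions are detected, the robot's position can be recovered geometrically. A minimal sketch, assuming range measurements to two known landmarks (the landmark coordinates below are made up; in the system described above these would be QR-code landmarks stored in the map), is the classic two-circle intersection:

```python
import math

# Known landmark positions in the world frame (hypothetical map entries).
L1 = (0.0, 0.0)
L2 = (4.0, 0.0)

def locate(r1, r2):
    """Return the two candidate robot positions given ranges r1, r2 to the
    landmarks L1, L2 (intersection of two circles)."""
    d = math.dist(L1, L2)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    # Base point on the L1-L2 line, then offsets perpendicular to it.
    bx = L1[0] + a * (L2[0] - L1[0]) / d
    by = L1[1] + a * (L2[1] - L1[1]) / d
    px = (L2[1] - L1[1]) / d
    py = -(L2[0] - L1[0]) / d
    return (bx + h * px, by + h * py), (bx - h * px, by - h * py)

c1, c2 = locate(math.sqrt(5.0), math.sqrt(13.0))
```

The ambiguity between the two candidates is typically resolved by odometry or by a bearing measurement; with the ranges above, one of the two candidates is the true position (1, 2).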

    Experimental verification of a walking robot self-localization system with the Kinect sensor

    In this paper, we investigate methods for self-localisation of a walking robot with the Kinect 3D active range sensor. The Iterative Closest Point (ICP) algorithm is considered as the basis for computing the robot's rotation and translation between two viewpoints. As an alternative, a feature-based method for matching 3D range data is considered, using Normal Aligned Radial Feature (NARF) descriptors. It is then shown that NARFs can be used to compute a good initial estimate for the ICP algorithm, resulting in convergent estimation of the sensor egomotion. Experimental results are provided.
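The ICP step described above can be sketched in 2-D with point-to-point correspondences and the SVD-based rigid alignment. This is a toy version on synthetic, noise-free data, not the paper's 3-D Kinect pipeline: the initial guess is simply the identity here, standing in for the NARF-based initial estimate, and the nearest-neighbour search is brute force.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t with R @ P_i + t ~= Q_i (2-D Kabsch algorithm)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(src, dst, R0, t0, iters=20):
    R, t = R0, t0
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbours (fine for a toy cloud).
        idx = np.argmin(((moved[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(src, dst[idx])
    return R, t

rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, (100, 2))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.3, -0.2])
dst = src @ R_true.T + t_true

# Identity initial guess; in the paper's pipeline this initial estimate
# would come from matching NARF descriptors between the two views.
R_est, t_est = icp(src, dst, np.eye(2), np.zeros(2))
err = np.linalg.norm(R_est - R_true) + np.linalg.norm(t_est - t_true)
```

When the initial guess is far from the true motion, plain ICP can lock onto wrong correspondences and diverge, which is exactly why a feature-based initial estimate such as one from NARF matching is valuable.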